Low Power Integrated Circuit for Analyzing a Digitized Audio Stream
Patent Abstract:
LOW POWER INTEGRATED CIRCUIT TO ANALYZE A DIGITIZED AUDIO STREAM. The examples disclose a low-power integrated circuit for receiving and digitizing an audio stream. Additionally, the examples provide for the low-power integrated circuit to compare the digitized audio stream with a keyword and store the digitized audio stream in memory. The examples further disclose that, upon recognition of the keyword in the digitized audio stream, the low-power integrated circuit transmits a signal to a processor to increase power and analyze the digitized audio stream.
Publication number: BR112014013832B1
Application number: R112014013832-0
Filing date: 2011-12-07
Publication date: 2021-05-25
Inventors: Eric Liu; Seung Wook Kim; Stefan J Marti
Applicant: Qualcomm Incorporated
IPC main classification:
Patent Description:
BACKGROUND
Computing devices have gained sophistication for users by processing audio instructions and providing answers. Users can recite audio instructions that can be used to control these computing devices. For example, users can talk to computing devices to provide information, such as instructions to provide directions to a specific location.
BRIEF DESCRIPTION OF THE DRAWINGS
[0001] In the accompanying drawings, like numerals refer to like components or blocks. The following detailed description makes reference to the drawings, in which:
[0002] Figure 1 is a block diagram of an exemplary computing device including a low-power integrated circuit for analyzing an audio stream and a processor for analyzing a digitized audio stream in response to detection of a keyword by the integrated circuit;
[0003] Figure 2 is a block diagram of an exemplary low-power integrated circuit for analyzing an audio stream and transmitting a signal to a processor to increase power when a keyword is detected in the audio stream;
[0004] Figure 3 is a block diagram of an exemplary computing device for analyzing a digitized audio stream and a server in communication with the computing device for analyzing a text stream generated from the digitized audio stream;
[0005] Figure 4 is a flowchart of an exemplary method performed on a computing device to receive an audio stream and determine a response; and
[0006] Figure 5 is a flowchart of an exemplary method performed on a computing device to compress a digitized audio stream and present a response.
DETAILED DESCRIPTION
[0007] When processing audio information, a user typically activates the audio processing application by pressing a button and/or reciting instructions. When launching the audio processing application, the user additionally needs to recite the explicit instructions that they want the computing device to carry out. Thus, processing speech instructions from a user can be time-consuming and repetitive. Furthermore, continuously monitoring instructions from the user consumes a lot of energy, draining the battery.
[0008] To address these problems, the exemplary embodiments disclosed here utilize a low-power integrated circuit to continuously monitor the occurrence of a keyword in an audio stream (e.g., user speech), while relying on a processor for more complete analysis of the user's speech. For example, several examples disclosed here provide for receiving an audio stream on a low-power integrated circuit, digitizing the audio stream, and analyzing the digitized audio stream to recognize a keyword. Upon recognizing the keyword within the digitized audio stream, the integrated circuit sends a signal to the processor to increase power. By increasing power to the processor, the digitized audio stream is retrieved to determine a response. This decreases the amount of time it takes the user to launch the specific audio processing application and avoids requiring the user to repeat speech. Determining the response from the retrieved audio stream also spares the user from providing additional explicit instructions for the computing device to perform speech analysis.
[0009] Additionally, in the various examples disclosed here, upon increasing the power to the processor, the processor retrieves the digitized audio stream from a memory and converts the digitized audio stream into a text stream. After converting to the text stream, the processor determines a response based on the text within the text stream.
Determining the response from the text stream reduces the time needed for the user to instruct the computing device. Additionally, the processor can determine the appropriate response based on the context of the audio stream. Additionally, the computing device determines which application needs to run to fulfill the response for the user. Additionally, by increasing the power to the processor upon recognizing the keyword within the digitized audio stream, the computing device consumes less power while listening to the user's speech.
[0010] In one embodiment, the computing device can also determine the response by receiving the response from a server or from the processor. In an additional embodiment, the memory keeps the stored digitized audio stream for a predetermined period of time. In this embodiment, the processor can retrieve the digitized audio stream in time increments. For example, the processor can retrieve the complete digitized audio stream, or it may retrieve a shorter time interval from the digitized audio stream. Retrieving the digitized audio stream allows the processor to analyze the context of the audio stream to determine the appropriate response.
[0011] In this way, exemplary embodiments disclosed here save a user's time by avoiding repetitive audio instructions to a computing device, since the computing device determines an appropriate response based on the context of the audio stream. Additionally, the computing device consumes less power while receiving and processing the audio streams.
[0012] Referring now to the drawings, Figure 1 is a block diagram of an exemplary computing device 100 including a low power integrated circuit 104 for receiving an audio stream 102 and a digitizing module 106 for digitizing the audio stream to provide the digitized audio stream 114 to a memory 112. Additionally, the low power integrated circuit 104 includes a keyword compare module 108 for comparing the digitized audio stream 114 with a keyword and, based upon recognition of the keyword, transmitting a signal 116 to a processor 118 to increase power 122. In addition, the processor includes an analysis module 120 for analyzing the digitized audio stream 114. Embodiments of computing device 100 include a client device, personal computer, desktop computer, laptop, mobile device, or other computing device suitable to include components 104, 112, and 118.
[0013] The audio stream 102 is received by the computing device 100, specifically, by the low power integrated circuit 104. The audio stream 102 is an analog input signal that is digitized in module 106 to provide the digitized audio stream 114. Embodiments of audio stream 102 include speech from a user or audio from another computing device. For example, there may be multiple computing devices receiving audio streams 102, which can be confusing. Thus, the computing devices may designate one device as a central point for receiving the audio stream 102. In this embodiment, the low power integrated circuit 104 operates as part of an ad-hoc network in which it may serve as a central unit for one or more computing devices.
[0014] For example, the user can discuss with another person the shortest route from New York to Los Angeles, California. In this example, the audio stream would be the discussion of the shortest route from New York to Los Angeles. In a further embodiment, audio stream 102 may include audio for a predetermined period of time. For example, audio stream 102 may include a few seconds or minutes when received by low power integrated circuit 104.
In that example, low power integrated circuit 104 can distinguish that audio stream 102 from other audio streams 102.
[0015] The low power integrated circuit 104 includes module 106 for digitizing the audio stream 102 and module 108 for comparing the digitized audio stream 114 with the keyword. Low power integrated circuit 104 is an electronic circuit with patterned trace elements on the surface of a material that form interconnections between other electronic components. For example, low power integrated circuit 104 forms connections between processor 118 and memory 112. Embodiments of low power integrated circuit 104 include a microchip, chipset, electronic circuit, chip, microprocessor, semiconductor, microcontroller, or other electronic circuit capable of receiving audio stream 102 and transmitting signal 116. Low power integrated circuit 104 can continuously monitor audio stream 102, use digitizing module 106 to digitize the audio stream, and store the digitized audio stream in memory 112. As such, additional embodiments of the low power integrated circuit 104 include a transmitter, receiver, microphone, or other component suitable for receiving the audio stream 102.
[0016] The audio stream 102 is digitized in module 106 to provide the digitized audio stream 114. The digitizing module 106 converts the audio stream into a discrete-time signal representation. Embodiments of digitizing module 106 include an analog-to-digital converter (ADC), digital conversion device, instruction, firmware, and/or software operating in conjunction with low power integrated circuit 104. For example, digitizing module 106 may include an electronic device for converting an analog input voltage into a digital number proportional to the magnitude of the analog signal.
[0017] When the audio stream 102 is digitized in module 106, it is compared with the keyword in module 108. The comparison of the digitized audio stream 102 with the keyword in module 108 operates as an indication to signal processor 118, via signal 116, to increase power 122 and obtain digitized audio stream 114 for analysis in module 120. Embodiments of module 108 include an instruction, process, operation, logic, algorithm, technique, logic function, firmware, and/or software. When the keyword is recognized, low power integrated circuit 104 transmits signal 116 to increase power 122 to processor 118.
[0018] Embodiments of the keyword include a digital signal, analog signal, pattern, database, commands, directions, instructions, or other representation for comparison in module 108. For example, the user of a computing device can discuss the difference between a shrimp and a common prawn and subsequently want to conduct a web search to identify the answer. As such, the user can state the predetermined keyword to trigger keyword recognition via keyword compare module 108 and subsequent analysis of the preceding discussion via analysis module 120.
[0019] The keyword may include, for example, a phrase, a unique keyword, or a single keyword that is private to the user of the computing device. According to the previous example, the keyword might be the phrase, "Computer, what do you think?". In this example, the phrase causes low power integrated circuit 104 to send signal 116 to processor 118 to obtain digitized audio stream 114, which could include the audio before or after the phrase. Thus, the user does not need to repeat the instructions, as the processor 118 analyzes the digitized audio stream 114 to determine the context of the audio stream 102 for an appropriate response.
Still, in an additional example, the unique keyword might include "Shazam". Thus, as a specific example, when the user speaks the word "Shazam", circuit 104 can detect the keyword and transmit signal 116 to instruct processor 118 to obtain digitized audio stream 114 and convert the stream into a text stream. Assuming the text stream is an instruction to compose a text message to the user's mother, the appropriate response would be to compose the text message. Thus, as described above, using the predetermined keyword(s), the low power integrated circuit 104 recognizes when the user of the computing device needs an additional response to be completed, such as providing directions or performing a web search.
[0020] In a further embodiment of module 108, when no keyword is recognized within digitized audio stream 114, low power integrated circuit 104 continues to monitor for another audio stream 102, which is digitized in module 106 and stored in memory 112. In yet a further embodiment, the low power integrated circuit 104 compresses the digitized audio stream 114, and that compressed digitized audio stream is used to recognize the keyword by comparing it to the keyword in module 108.
[0021] The memory 112 stores and/or maintains the digitized audio stream 114. Embodiments of the memory 112 may include a temporary memory store, cache, non-volatile memory, volatile memory, random access memory (RAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), storage unit, Compact Disc Read-Only Memory (CD-ROM), or other memory capable of storing and/or holding the digitized audio stream 114.
[0022] The digitized audio stream 114 is stored in memory 112. Arrangements may include the low power integrated circuit 104 compressing the audio stream 102 after the digitizing module 106 to obtain a compressed digitized audio stream prior to placement in memory 112. Although Figure 1 illustrates digitized audio stream 114 stored in memory 112, the digitized audio stream may also be stored in a memory on low power integrated circuit 104. In a further embodiment, digitized audio stream 114 includes a predetermined amount of time of an audio stream 102. In this embodiment, when the audio stream 102 is received for a predetermined period of time, such as a few seconds or minutes, that predetermined period of time of the audio stream 102 is digitized and stored in memory 112 for processor 118 to obtain and/or retrieve. Additionally in this embodiment, when another audio stream 102 is received by low power integrated circuit 104 and digitized, the previous digitized audio stream in memory is replaced by the most current digitized audio stream 114. Thus, processor 118 obtains and/or retrieves the most current audio stream 102. In this embodiment, the memory operates as a temporary first-in, first-out buffer to provide the most current audio stream 102.
[0023] Signal 116 is transmitted from low power integrated circuit 104 to processor 118 upon the recognition of the keyword within digitized audio stream 114. Signal 116 instructs processor 118 to increase power 122 and analyze digitized audio stream 114 from memory 112. Embodiments of signal 116 include a communication, transmission, electrical signal, instruction, digital signal, analog signal, or other type of communication to increase power 122 to processor 118. An additional embodiment of signal 116 includes an interrupt transmitted to processor 118 upon recognition of the keyword within digitized audio stream 114.
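The buffering and detection behavior described in paragraphs [0016] through [0023] can be illustrated with a short sketch. The C program below is a simplified, hypothetical illustration rather than the disclosed circuit itself: the 8 kHz sample rate, two-second buffer, correlation-style template match, detection threshold, and the adc_read_sample, keyword_detected, and raise_wake_signal helpers are all stand-ins for digitizing module 106, keyword compare module 108, memory 112, and signal 116.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sizing: 2 s of 8 kHz, 16-bit mono audio. */
#define SAMPLE_RATE_HZ 8000
#define BUFFER_SAMPLES (SAMPLE_RATE_HZ * 2)
#define TEMPLATE_LEN   16

/* First-in, first-out ring buffer (memory 112 in paragraph [0022]):
 * the newest samples silently overwrite the oldest ones, so the buffer
 * always holds the most recent window of digitized audio. */
typedef struct {
    int16_t samples[BUFFER_SAMPLES];
    size_t  head;            /* next write position */
} audio_ring_t;

static void ring_push(audio_ring_t *rb, int16_t sample)
{
    rb->samples[rb->head] = sample;
    rb->head = (rb->head + 1) % BUFFER_SAMPLES;
}

/* Stand-in for the ADC of digitizing module 106: here it just synthesizes
 * samples; on hardware it would read the converter's output register. */
static int16_t adc_read_sample(void)
{
    return (int16_t)((rand() % 2001) - 1000);
}

/* Very simplified stand-in for keyword compare module 108: correlates the
 * newest TEMPLATE_LEN samples against a stored template and thresholds the
 * score. A real detector would match compressed features or a small
 * acoustic model rather than raw samples. */
static int keyword_detected(const audio_ring_t *rb, const int16_t *tmpl)
{
    int64_t score = 0;
    for (size_t i = 0; i < TEMPLATE_LEN; i++) {
        size_t idx = (rb->head + BUFFER_SAMPLES - TEMPLATE_LEN + i)
                     % BUFFER_SAMPLES;
        score += (int64_t)rb->samples[idx] * tmpl[i];
    }
    return score > 4000000;  /* hypothetical detection threshold */
}

/* Stand-in for signal 116: on real hardware this would assert an interrupt
 * or power-management line so power source 122 wakes processor 118. */
static void raise_wake_signal(void)
{
    puts("signal 116: wake processor 118 and hand over the buffered audio");
}

int main(void)
{
    audio_ring_t rb = { {0}, 0 };
    const int16_t tmpl[TEMPLATE_LEN] = {
        900, 900, 900, 900, 900, 900, 900, 900,
        900, 900, 900, 900, 900, 900, 900, 900
    };

    /* Continuous monitoring loop: digitize, buffer, compare (bounded here
     * so the sketch terminates). */
    for (int i = 0; i < 5 * SAMPLE_RATE_HZ; i++) {
        ring_push(&rb, adc_read_sample());
        if (keyword_detected(&rb, tmpl)) {
            raise_wake_signal();
            break;
        }
    }
    return 0;
}
```

On actual low-power hardware, the monitoring loop would typically be driven by an ADC interrupt or DMA callback rather than polling, and the fixed template would be replaced by a compact acoustic model running on the integrated circuit.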
[0024] Processor 118 receives signal 116 to increase power 122 and obtains digitized audio stream 114 for analysis in module 120. Embodiments of processor 118 may include a central processing unit (CPU), visual processing unit (VPU), microprocessor, graphics processing unit (GPU), or other programmable device suitable for analyzing the digitized audio stream 114 in module 120.
[0025] When processor 118 obtains digitized audio stream 114 from memory 112, the processor analyzes digitized audio stream 114 in module 120. Embodiments of analysis module 120 include an instruction, process, operation, logic, algorithm, technique, logic function, firmware, and/or software that processor 118 can fetch, decode, and/or execute. Additional embodiments of module 120 include converting digitized audio stream 114 into a text stream to determine an appropriate response based on the context of audio stream 102. Further embodiments of module 120 include determining a response to present to the user of the computing device 100, as will be seen in later figures.
[0026] Power source 122 provides electrical energy in the form of electrical potential to processor 118. Specifically, power source 122 increases electrical energy to processor 118 upon receiving signal 116 from low power integrated circuit 104. By increasing power 122 to processor 118, processor 118 is awakened or activated to obtain digitized audio stream 114. Arrangements of power source 122 include a power supply, power management device, battery, energy storage medium, electromechanical system, solar power, power outlet, or other device capable of providing power 122 to processor 118. In a further embodiment, power source 122 provides electrical power to computing device 100.
[0027] Referring now to Figure 2, Figure 2 is a block diagram of an exemplary low power integrated circuit 204 for analyzing an audio stream 202 and transmitting a signal 216 to a processor to increase power when a keyword is detected in audio stream 202. Low power integrated circuit 204 includes circuitry 210 that produces a digitized audio stream 214 using digitization circuitry 206, detects the keyword via comparison circuitry 208, and, upon recognition of the keyword in the digitized audio stream 214, transmits signal 216.
[0028] The audio stream 202 is received by the low power integrated circuit 204. The audio stream 202 may be similar in structure to the audio stream 102 of Figure 1.
[0029] The low power integrated circuit 204 includes the circuitry 210 for digitizing the audio stream 202 and comparing the digitized audio stream 214 with a keyword. The low power integrated circuit 204 may be similar in functionality and structure to the low power integrated circuit 104 described above in Figure 1.
[0030] Circuitry 210 includes digitizing circuitry 206 and comparison circuitry 208. Embodiments of circuitry 210 include logic, analog circuitry, electronic circuitry, digital circuitry, or other circuitry able to digitize the audio stream 202 and compare the digitized audio stream 214 with the keyword. In additional embodiments, the circuitry includes an application and/or firmware that can be used independently and/or in conjunction with the low power integrated circuit 204 to fetch, decode, and/or execute the circuitry 206 and 208.
[0031] The audio stream 202 is received and digitized by the circuitry 206 to produce the digitized audio stream 214. The digitization circuitry 206 performs a type of conversion on the audio stream 202.
Digitizing circuitry 206 may be similar in functionality to digitizing module 106 as described in connection with Figure 1.
[0032] The low power integrated circuit 204 receives the audio stream 202 for digitization in circuitry 206 and produces the digitized audio stream 214. The digitized audio stream 214 may be similar in structure to the digitized audio stream 114 as described in connection with Figure 1. Additionally, while Figure 2 illustrates digitized audio stream 214 outside low power integrated circuit 204, digitized audio stream 214 may also be located within low power integrated circuit 204. The digitized audio stream 214 located within low power integrated circuit 204 is used in circuitry 208 for comparison with a keyword. In another embodiment, the digitized audio stream 214 is stored and/or maintained in a memory.
[0033] The circuitry 208 included in the circuitry 210 of the low power integrated circuit 204 compares the digitized audio stream 214 with the keyword. Additionally, circuitry 208 is used to recognize the keyword within digitized audio stream 214 in order to transmit signal 216 to increase power to the processor. Comparison circuitry 208 may be similar in functionality to module 108 as described in connection with Figure 1.
[0034] Signal 216 instructs a device to increase power upon keyword recognition within digitized audio stream 214 via comparison circuitry 208. Signal 216 may be similar in structure and functionality to signal 116 of Figure 1. One embodiment of signal 216 includes instructing a processor to power up and analyze the digitized audio stream 214 from memory. In this embodiment, signal 216 instructs the processor to obtain digitized audio stream 214 to analyze and determine a response based on the keyword recognition in circuitry 208.
[0035] Figure 3 is a block diagram of an exemplary computing device 300 for analyzing a digitized audio stream 314 and a server 326 in communication with the computing device 300 for analyzing a text stream 324 generated from the digitized audio stream 314. Computing device 300 includes a low power integrated circuit 304, a memory 312, a processor 318, and an output device 328, and is in communication with a server 326. Specifically, Figure 3 illustrates the text stream 324 processed by server 326 or processor 318 to present a response to a user of the computing device at output device 328. Computing device 300 may be similar in structure and functionality to computing device 100 as described in connection with Figure 1.
[0036] The audio stream 302 is received by the computing device 300, specifically, by the low power integrated circuit 304. The audio stream 302 may be similar in structure to the audio streams 102 and 202 of Figure 1 and Figure 2, respectively.
[0037] The low power integrated circuit 304 includes a digitization module 306 and an analysis module 308. In one embodiment, the low power integrated circuit 304 includes circuitry to comprise modules 306 and 308. The low power integrated circuit 304 may be similar in structure and functionality to the low power integrated circuits 104 and 204 described in connection with Figure 1 and Figure 2, respectively.
[0038] The audio stream 302, when received by computing device 300, is digitized in module 306 to produce a digitized audio stream 314. Digitizing module 306 may be similar in structure and functionality to digitizing module 106 and digitizing circuitry 206 of Figure 1 and Figure 2, respectively.
In a further embodiment, when the audio stream 302 is digitized in module 306, the low power integrated circuit 304 transmits the digitized audio stream 314 to memory 312 for storage and/or maintenance.
[0039] When the audio stream 302 is digitized, the low power integrated circuit analyzes the digitized audio stream 314 in module 308. In one embodiment, module 308 compares a keyword with the digitized audio stream 314. In this embodiment, module 308 includes the functionality of the compare module 108 described above in Figure 1.
[0040] The memory 312 stores the digitized audio stream 314 from the low power integrated circuit 304. In one embodiment, the memory 312 holds the received digitized audio stream 314 for a predetermined period of time. For example, audio stream 302 can be monitored for a predetermined time of a few seconds, and as such, those few seconds of audio stream 302 are digitized in module 306 and sent to memory 312. In this example, memory 312 stores the digitized audio stream 314 of those few seconds to be retrieved and/or obtained by the processor 318 for analysis upon receiving the signal 316. Also in this example, when another audio stream 302 of a few seconds is received and digitized, that other digitized audio stream replaces the previous digitized audio stream 314. This allows the memory 312 to hold the most current audio stream 302 for the processor 318 to obtain and/or retrieve. Memory 312 may be similar in structure and functionality to memory 112 as described in connection with Figure 1.
[0041] The audio stream 302 is digitized in module 306 to produce digitized audio stream 314. Digitized audio stream 314 is stored and/or maintained in memory 312. In one embodiment, processor 318 obtains digitized audio stream 314 for analysis in module 320 upon receiving signal 316. Digitized audio stream 314 may be similar in structure and functionality to digitized audio streams 114 and 214 as described in connection with Figure 1 and Figure 2, respectively.
[0042] Signal 316 is a transmission from low power integrated circuit 304 to processor 318 to increase power 322. In one embodiment of signal 316, processor 318 is instructed to obtain digitized audio stream 314 for analysis in module 320. Signal 316 may be similar in structure and functionality to signals 116 and 216 as described in connection with Figure 1 and Figure 2, respectively.
[0043] Power supply 322 provides electrical power to processor 318 and/or computing device 300. Power supply 322 may be similar in structure and functionality to power supply 122 as described in connection with Figure 1.
[0044] Processor 318 includes analysis module 320 and text stream 324. Specifically, processor 318 receives signal 316 to increase power 322. Upon receiving signal 316, processor 318 obtains digitized audio stream 314 for analysis in module 320. In an additional embodiment, processor 318 converts digitized audio stream 314 into text stream 324. In that embodiment, text within text stream 324 determines a response for computing device 300. The text is a finite sequence of symbols or representations from an alphabet, numeric set, or alphanumeric set. For example, the digitized audio stream 314 may be in a binary language, so the processor transforms the bytes of the binary representation into words. In a further example, the digitized audio stream 314 may be in a language representative of words and/or numbers, so processor 318 converts that language into text that processor 318 understands.
Embodiments of the response include performing a web search, dialing a phone number, opening an application, recording text, streaming media, composing a text message, listing directions, or speaking directions. In a further embodiment, processor 318 determines the response to present to a user of computing device 300. Processor 318 may be similar in structure and functionality to processor 118 as described in connection with Figure 1.
[0045] Processor 318 analyzes the stored digitized audio stream 314 in module 320. Embodiments of analysis module 320 include transmitting the digitized audio stream 314 obtained from memory 312 to server 326. Other embodiments of module 320 include converting the digitized audio stream 314 obtained from memory 312 into text stream 324 and transmitting text stream 324 to server 326. Still other embodiments of module 320 include converting digitized audio stream 314 into text stream 324 to determine the appropriate response by analyzing the context of audio stream 302. For example, digitized audio stream 314 can be converted into text stream 324 in module 320, and processor 318 can use natural language processing to analyze the text within text stream 324 to determine the appropriate response based on the context of the audio stream 302.
[0046] Text stream 324 includes text used to determine the appropriate response for computing device 300. In one embodiment, text stream 324 is processed by the processor to determine the appropriate response to be presented to the user of computing device 300 at output device 328. In another embodiment, text stream 324 is processed by server 326 to determine the appropriate response, which is transmitted to computing device 300. In that embodiment, the response is sent from server 326 to the computing device 300. In a further embodiment, computing device 300 presents the response to the user of computing device 300. For example, text stream 324 may include text that discusses sending a text message to the user's mother. Thus, the text within text stream 324 determines that computing device 300 should respond by composing the text message to the mother.
[0047] Server 326 provides services over a network and may include, for example, a network server, a local area network (LAN) server, a file server, or any other computing device suitable for processing the text stream 324 and transmitting the response to computing device 300.
[0048] Output device 328 presents the response, as determined from the text within text stream 324, to the user of computing device 300. Embodiments of output device 328 include a display device, a screen, or a speaker for presenting the response to a user of computing device 300. In accordance with the text message to the mother example, the user of computing device 300 may have a display that shows the text message being composed to the mother and/or a speaker to communicate the text message to the user.
[0049] Turning now to Figure 4, Figure 4 is a flowchart of an exemplary method performed on a computing device to receive an audio stream and determine a response. Although Figure 4 is depicted as being performed on computing device 100 as in Figure 1, it may also be performed on other suitable components as will be apparent to those skilled in the art. For example, Figure 4 can be implemented in the form of executable instructions on a machine-readable storage medium, such as memory 112.
[0050] In operation 402, the computing device, operating in conjunction with a low power integrated circuit, receives an audio stream.
In one embodiment, the audio stream is received for a predetermined amount of time. For example, the audio stream can be a few seconds or milliseconds. In this embodiment, the computing device can monitor the audio continuously. In other embodiments, the audio stream includes at least one of speech from a user or audio from another computing device.
[0051] In operation 404, the low power integrated circuit, operating in conjunction with the computing device, digitizes the audio stream received in operation 402 to produce a digitized audio stream. Embodiments of operation 404 include the use of an analog-to-digital converter (ADC), digital conversion device, instruction, firmware, and/or software operating in conjunction with the low power integrated circuit. Embodiments of operation 404 include transmitting the digitized audio stream to a memory. Additional embodiments of operation 404 include compressing the audio stream received in operation 402, while another embodiment of operation 404 includes compressing the digitized audio stream.
[0052] In operation 406, the digitized audio stream produced in operation 404 is stored in memory. Embodiments of operation 406 include the memory storing and/or maintaining the digitized audio stream. In another embodiment of operation 406, the audio stream received during the predetermined amount of time in operation 402 is digitized in operation 404; thus, when another audio stream is received in operation 402 and digitized in operation 404, that more current digitized audio stream replaces the previous digitized audio stream. In this embodiment, the memory holds the stored digitized audio stream received during the predetermined period of time before the current time.
[0053] In operation 408, the low power integrated circuit analyzes the digitized audio stream produced in operation 404. Embodiments of operation 408 include processing the digitized audio stream, while other embodiments include comparing the digitized audio stream with a keyword. In these embodiments of operation 408, the low power integrated circuit processes the digitized audio stream for the keyword. Upon the recognition of the keyword within the digitized audio stream, the method moves to operation 410 to transmit a signal. In an additional embodiment, if the low power integrated circuit does not recognize the keyword within the digitized audio stream, the method returns to operation 402. A further embodiment includes comparing the digitized audio stream with an analog representation or digital signal indicating that the user of the computing device wants a response from the computing device. In yet an additional embodiment, operations 402, 404, 406, and 408 occur in parallel. For example, while the computing device analyzes the digitized audio stream in operation 408, the integrated circuit continues to receive audio streams in operation 402, digitizing and storing the audio stream in operations 404 and 406.
[0054] In operation 410, the low power integrated circuit transmits the signal to the processor to increase power. Specifically, upon recognizing the keyword within the digitized audio stream, the low-power integrated circuit transmits a signal to the processor to increase power. In one embodiment of operation 410, the power or electrical energy supplied to the processor and/or computing device is increased.
[0055] In operation 412, the processor obtains the digitized audio stream stored in memory in operation 406.
In one embodiment of operation 412, the memory transmits the digitized audio stream to the processor, while in another embodiment of operation 412, the processor retrieves the digitized audio stream from memory.
[0056] In operation 414, the processor converts the digitized audio stream obtained in operation 412 into a text stream. After converting the digitized audio stream into a text stream, the processor analyzes the text within the text stream to determine the appropriate response. Embodiments of operation 414 include using speech to text (STT), voice to text, digital to text, or another type of text conversion. An additional embodiment of operation 414 includes utilizing natural language processing after conversion to the text stream. In this embodiment, the computing device processes the text within the text stream to determine an appropriate response based on the context of the audio stream received in operation 402. For example, upon detecting the keyword within the digitized audio stream in operation 408, the processor obtains the digitized audio stream in operation 412, and the digitized audio stream is converted into a text stream in operation 414. In a further example, the audio stream may include a conversation regarding directions between two locations; thus, when that digitized audio stream is converted into a text stream in operation 414, the processor can determine the appropriate response by analyzing the text within the text stream.
[0057] In operation 416, the processor determines the response based on the text stream produced in operation 414. Embodiments of the response include performing a web search, dialing a telephone number, opening an application, recording text, streaming media, composing a text message, listing directions, or speaking directions. In one embodiment, the text within the text stream determines the appropriate response for the processor. In an additional embodiment, the response is presented to a user of the computing device. For example, the text stream might include speech asking how to get to China, and as such, directions to China would be the appropriate response. Additionally, in this example, a map listing and/or showing directions to China could be included.
[0058] Referring now to Figure 5, Figure 5 is a flowchart of an exemplary method performed on a computing device to compress a digitized audio stream and present a response to a user of the computing device. Although Figure 5 is depicted as performed on computing device 300 as above in Figure 3, it can be performed on other suitable components as will be apparent to those skilled in the art. For example, Figure 5 can be implemented in the form of executable instructions on a machine-readable storage medium, such as memory 312.
[0059] In operation 502, the computing device compresses a digitized audio stream. In one embodiment, operation 502 is performed in conjunction with operation 404 and before operation 406 in Figure 4. For example, when digitizing the received audio stream, a low power integrated circuit operating in conjunction with the computing device can compress the digitized audio stream to reduce the data byte size of the stream. In this example, compression of the digitized audio stream occurs before it is stored in memory in operation 406. In an additional embodiment, operation 502 is performed before the processor obtains the digitized audio stream in operation 412 in Figure 4.
For example, the processor may perform operation 502 to compress the digitized audio stream from memory, while in another example, the memory may compress the digitized audio stream before the processor obtains it. In yet a further embodiment of operation 502, the compressed digitized audio stream is analyzed to recognize a keyword, as in operation 408 in Figure 4.
[0060] In operation 504, the computing device presents a response to the user of the computing device. Embodiments of operation 504 include occurring during or after operation 416 in Figure 4. For example, when the processor determines the appropriate response, that response can be presented to the user of the computing device. In a further embodiment, the response can be presented to the user on an output device, such as a display screen or speaker, operating in conjunction with the computing device. For example, when the user discusses the difference between a shrimp and a prawn, the processor can launch a web search application, thereby performing a web search for the difference between a shrimp and a prawn. The results of the web search can be provided on the display device of the computing device to the user. In a further example, the computing device audibly recites the differences between shrimp and prawns through a speaker to the user. In these embodiments, the computing device operates on the audio stream to determine a response rather than requiring the user to explicitly instruct the computing device.
[0061] The embodiments described here in detail refer to digitizing an audio stream to detect a keyword and, based on the recognition of the keyword within the digitized audio stream, transmitting a signal to a processor to increase power and further analyze the digitized audio stream to determine a response. In this way, exemplary embodiments save user time by avoiding repetitive audio instructions to a computing device, while reducing the computing device's power consumption.
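To make the processor-side flow of Figures 4 and 5 concrete, the C sketch below walks through operations 412 through 416 after the wake signal has been received. It is a hypothetical illustration only: fetch_buffered_audio, speech_to_text, and determine_response are stubs standing in for retrieval from memory 312, the speech-to-text conversion of operation 414, and the context analysis of module 320; a real system would run an actual STT engine and richer natural language processing, possibly delegating the text stream to server 326.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for retrieving the stored digitized audio stream
 * from memory 312 (operation 412); a real implementation would copy the
 * ring buffer contents via shared memory or a bus transfer. */
static size_t fetch_buffered_audio(short *dst, size_t max_samples)
{
    (void)dst;
    (void)max_samples;
    return 0; /* no real audio in this sketch */
}

/* Hypothetical stand-in for speech-to-text conversion (operation 414);
 * a real implementation would run an STT engine over the samples. */
static const char *speech_to_text(const short *samples, size_t n)
{
    (void)samples;
    (void)n;
    return "computer what do you think is the shortest route from new york to los angeles";
}

/* Toy context analysis (operations 414-416): scan the text stream for cue
 * words and map them to one of the responses listed in the description. */
static const char *determine_response(const char *text)
{
    if (strstr(text, "route") || strstr(text, "directions"))
        return "launch navigation and list directions";
    if (strstr(text, "text message"))
        return "compose a text message";
    return "perform a web search for the discussed topic";
}

int main(void)
{
    short buffer[16000];
    size_t n = fetch_buffered_audio(buffer, sizeof buffer / sizeof buffer[0]);
    const char *text = speech_to_text(buffer, n);

    /* Present the determined response (operation 504 / output device 328). */
    printf("text stream: %s\n", text);
    printf("response: %s\n", determine_response(text));
    return 0;
}
```

The keyword-to-response mapping here is intentionally trivial; the description contemplates context-aware natural language processing rather than fixed string matching.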
Claims (15)
[0001] 1. Method performed by a computing device (100, 300) including a low power integrated circuit (104, 204, 304) and a processor, the method comprising: receiving (402) an audio stream (102, 202, 302); continuously monitoring the audio stream through the low power integrated circuit; digitizing (404) the audio stream; storing the digitized audio stream (114, 214, 314) in a memory, wherein storing the digitized audio stream comprises replacing a previous digitized audio stream stored in memory with the digitized audio stream; analyzing (408), through the low power integrated circuit, the stored digitized audio stream for recognition of a keyword; upon recognition of the keyword within the stored digitized audio stream, transmitting (410), via the low power integrated circuit (104, 204, 304), a signal (116, 216, 316) to the processor (118, 318) to increase power (122, 322); obtaining, by the processor, the stored digitized audio stream from memory; and analyzing, by the processor, the stored digitized audio stream to determine (416) an action for the computing device.
[0002] 2. The method of claim 1, further comprising converting, by the processor, the stored digitized audio stream into a text stream (324), and wherein the action for the computing device is based on the text stream.
[0003] 3. Method according to claim 2, characterized in that the stored digitized audio stream comprises the keyword, and the action for the computing device is determined based on the text stream corresponding to the audio before the keyword.
[0004] 4. The method of claim 1, further comprising: compressing (502) the digitized audio stream into a compressed digitized audio stream, wherein the analyzing comprises analyzing the compressed digitized audio stream for recognition of the keyword.
[0005] 5. Method according to claim 2, characterized in that the text stream comprises a sequence of symbols that the processor is configured to understand.
[0006] 6. Method according to claim 1, characterized in that the action includes at least one of performing a web search, dialing a telephone number, opening an application, recording text, running streaming media, composing a text message, listing directions, or speaking directions.
[0007] 7. The method of claim 1, further comprising: maintaining, in memory, the received digitized audio stream for a period of time prior to a current time.
[0008] 8. Method according to claim 2, characterized in that determining the action for the computing device comprises: transmitting the text stream to a server; receiving a response from the server based on the analysis of the text stream by the server; and determining the action based on the processor's analysis of the response.
[0009] 9. Method according to claim 1, characterized in that the audio stream includes at least one of speech from a user, speech from another computing device, and audio from another computing device.
[0010] 10.
Computing device (100, 300) characterized by comprising a low power integrated circuit (104, 204, 304) and a processor (118, 318), the low power integrated circuit (104, 204, 304) configured to: upon receiving an audio stream (102, 202, 302), continuously monitor and digitize the audio stream; store the digitized audio stream (114, 214, 314) in a memory, wherein storing the digitized audio stream comprises replacing a previous digitized audio stream stored in memory with the digitized audio stream; analyze the stored digitized audio stream for recognition of a keyword; and upon keyword recognition in the stored digitized audio stream, increase power (122, 322) to the processor (118, 318); and the processor configured to: retrieve the stored digitized audio stream from memory; and analyze the stored digitized audio stream (114, 214, 314) to determine an action to be taken by the computing device.
[0011] 11. The computing device of claim 10, characterized in that the stored digitized audio stream comprises the keyword and audio before the keyword, and wherein the action for the computing device is determined on the basis of the audio before the keyword.
[0012] 12. The computing device of claim 10, characterized in that, before analyzing the digitized audio stream, the processor is further configured to: transmit the digitized audio stream, or a text stream generated from the digitized audio stream, to a server to determine an action; and receive a response from the server indicating the action to be taken by the computing device.
[0013] 13. The computing device of claim 10, wherein the low power integrated circuit is further configured to: compress the digitized audio stream to obtain a compressed digitized audio stream, and analyze the compressed digitized audio stream to recognize the keyword.
[0014] 14. The computing device of claim 10, further comprising: an output device for presenting the response to a user of the computing device.
[0015] 15. The computing device of claim 10, characterized in that, to analyze the digitized audio stream to recognize the keyword, the low power integrated circuit is configured to compare the digitized audio stream with the keyword.